Results 1 - 20 of 29
1.
Interactive Learning Environments ; : No Pagination Specified, 2023.
Article in English | APA PsycInfo | ID: covidwho-20245175

ABSTRACT

Mobile application developers rely largely on user reviews for identifying issues in mobile applications and meeting users' expectations. User reviews are unstructured, unorganized and very informal. Identifying and classifying issues by extracting the required information from reviews is difficult because of the large number of reviews. To automate the process of classifying reviews, many researchers have adopted machine learning approaches. Keeping in view the rising demand for educational applications, especially during COVID-19, this research aims to automate the classification and sentiment analysis of Android educational application reviews using natural language processing and machine learning techniques. A baseline corpus comprising 13,000 records was built by collecting reviews of more than 20 educational applications. The reviews were then manually labelled with respect to sentiment and the issue types mentioned in each review. User reviews are classified into eight categories, and various machine learning algorithms are applied to classify users' sentiments and application issues. The results demonstrate that our proposed framework achieved an accuracy of 97% for sentiment identification and an accuracy of 94% in classifying the most significant issues. Moreover, the interpretability of the model is verified using the explainable artificial intelligence technique of local interpretable model-agnostic explanations. (PsycInfo Database Record (c) 2023 APA, all rights reserved)
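As an illustration of the kind of review-classification pipeline described above (a generic sketch, not the authors' code), the snippet below trains a TF-IDF plus logistic-regression model on a few made-up reviews and asks LIME for a word-level explanation; the example reviews, labels, and model choice are all assumptions.

```python
# Minimal sketch: TF-IDF + logistic regression for review sentiment,
# explained with LIME. Example reviews and labels are toy data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from lime.lime_text import LimeTextExplainer

reviews = [
    "The app crashes every time I open a quiz",
    "Great lessons, my kids love the videos",
    "Login fails after the latest update",
    "Very helpful for remote learning during lockdown",
]
labels = [0, 1, 0, 1]  # 0 = negative, 1 = positive (toy labels)

pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())
pipeline.fit(reviews, labels)

explainer = LimeTextExplainer(class_names=["negative", "positive"])
exp = explainer.explain_instance(
    "The new update is great but it keeps crashing",
    pipeline.predict_proba,     # LIME perturbs the text and queries this
    num_features=5,
)
print(exp.as_list())            # word-level weights behind the prediction
```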

2.
IEEE Transactions on Computational Social Systems ; 10(3):1356-1371, 2023.
Article in English | Scopus | ID: covidwho-20237593

ABSTRACT

Online social networks are in the limelight of public debate, where antagonistic groups compete to impose conflicting narratives and polarize discussions. This article proposes an approach for measuring network polarization and political sectarianism on Twitter based on user interaction networks. Centrality metrics identify a small group of influential users (polarizers and unpolarizers) who influence a larger group of users (polarizees and unpolarizees) according to their ideological stance (left, right, and undefined). Network polarization is computed with Bayesian probability from typical actions such as following, tweeting, retweeting, and replying. The measurement of political sectarianism also uses Bayesian probability, together with words extracted from the tweets, to quantify the intensity of othering, aversion, and moralization in the debate. We collected Twitter data from 33 contentious political events in Brazil during 2020, strongly influenced by the COVID-19 pandemic. Based on our methodology and polarization score, the results reveal that the approach based on user interaction networks leads to a better understanding of polarized conflicts on Twitter. Moreover, a small number of polarizers is enough to represent the polarization and sectarianism of Twitter events. © 2014 IEEE.
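The abstract describes identifying a small influential core via centrality metrics; the toy sketch below (not the paper's Bayesian formulation) shows the general idea with networkx, using in-degree centrality on a made-up retweet graph.

```python
# Toy sketch: rank users in a retweet graph by centrality and take the
# top fraction as candidate "polarizers". Edges and threshold are made up.
import networkx as nx

# Directed edge u -> v means "u retweeted v"
edges = [
    ("a", "leader1"), ("b", "leader1"), ("c", "leader1"),
    ("d", "leader2"), ("e", "leader2"), ("a", "leader2"),
    ("f", "c"),
]
G = nx.DiGraph(edges)

# In-degree centrality as a simple proxy for influence
centrality = nx.in_degree_centrality(G)
ranked = sorted(centrality.items(), key=lambda kv: kv[1], reverse=True)

top_k = max(1, int(0.1 * G.number_of_nodes()))   # small influential core
polarizers = [user for user, _ in ranked[:top_k]]
print("candidate polarizers:", polarizers)
```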

3.
Pan Afr Med J ; 44: 132, 2023.
Article in English | MEDLINE | ID: covidwho-2312496

ABSTRACT

One of the rare consequences of COVID-19 is a rise in blood carbon dioxide, which can lead to unconsciousness, dysrhythmia, and cardiac arrest. Therefore, in COVID-19 hypercarbia, non-invasive ventilation (with bi-level positive airway pressure, BiPAP) is recommended for treatment. If CO2 does not decrease or continues to rise, the patient's trachea must be intubated for supportive hyperventilation with a ventilator (invasive ventilation). The high morbidity and mortality rate of mechanical ventilation is an important problem of invasive ventilation. We introduced an innovative treatment of hypercapnia without invasive ventilation in order to reduce morbidity and mortality. This new approach could open a window for researchers and therapists to reduce COVID-19 deaths. To investigate the cause of hypercapnia, we measured the carbon dioxide of the airways (mask and tubes of the ventilator) with a capnograph. Increased carbon dioxide inside the mask and tubes of the device was found in a severely hypercapnic COVID-19 patient in the intensive care unit (ICU). She weighed 120 kg and had diabetes. Her PaCO2 was 138 mmHg. In this condition she would have had to undergo invasive ventilation and accept its complications and lethal risk, but we decreased her PaCO2 by placing a soda lime canister in the expiratory pathway to absorb CO2 from the mask and ventilation tubing. Her PaCO2 dropped from 138 to 80 mmHg, and the patient woke completely from her drowsiness the next day without invasive ventilation. This innovative method was continued until the PaCO2 reached 55 mmHg, and she was discharged home 14 days later after recovering from COVID-19. Soda lime is used for carbon dioxide absorption in anesthesia machines, and its application to hypercarbia in the ICU deserves further research as a way to postpone invasive ventilation in the treatment of hypercapnia.


Subject(s)
COVID-19 , Hypercapnia , Humans , Female , Hypercapnia/etiology , Hypercapnia/therapy , Carbon Dioxide , COVID-19/therapy , Oxides
4.
Diagnostics (Basel) ; 13(9)2023 May 05.
Article in English | MEDLINE | ID: covidwho-2316351

ABSTRACT

In this research, we demonstrate a deep convolutional neural network-based classification model for the detection of monkeypox. Monkeypox can be difficult to diagnose clinically in its early stages because its symptoms resemble both chickenpox and measles. Early diagnosis of monkeypox helps doctors treat it more quickly. Because the manual analysis of a large number of images is labor-intensive and prone to inaccuracy, pre-trained models are frequently used in the diagnosis of monkeypox, and an automated detection process is required. The large layer count of convolutional neural network (CNN) architectures enables them to learn features on their own, contributing to better performance in image classification. The scientific community has recently devoted significant attention to employing artificial intelligence (AI) to diagnose monkeypox from digital skin images, due primarily to AI's success in COVID-19 identification. The VGG16, VGG19, ResNet50, ResNet101, DenseNet201, and AlexNet models were used in our proposed method to distinguish patients with monkeypox symptoms from similar-looking conditions (chickenpox, measles, and normal skin). The majority of images in our research were collected from publicly available datasets. This study also proposes an adaptive k-means clustering image segmentation technique that delivers precise segmentation results with straightforward operation. Our preliminary computational findings show that the proposed model can accurately detect patients with monkeypox. The best overall accuracy, achieved by ResNet101, is 94.25%, with an AUC of 98.59%. Additionally, we describe the categorization produced by our model using feature extraction with Local Interpretable Model-Agnostic Explanations (LIME), which provides a more in-depth understanding of the particular properties that distinguish the monkeypox virus.
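The k-means segmentation step could look roughly like the plain colour-clustering sketch below; the random image stands in for a real skin photograph and the cluster count is an arbitrary assumption, so this only illustrates the idea, not the authors' adaptive variant.

```python
# Sketch: k-means colour clustering as a simple lesion-segmentation step.
# The random image is a placeholder for a real skin photo; 3 clusters is arbitrary.
import numpy as np
from sklearn.cluster import KMeans

img = np.random.randint(0, 255, size=(128, 128, 3), dtype=np.uint8)  # placeholder photo
pixels = img.reshape(-1, 3).astype(float)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(pixels)

# Treat the darkest cluster centre as a crude "lesion" mask
lesion_cluster = int(np.argmin(kmeans.cluster_centers_.sum(axis=1)))
mask = (kmeans.labels_ == lesion_cluster).reshape(img.shape[:2])
print("lesion pixels:", int(mask.sum()), "of", mask.size)
```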

5.
IEEE Transactions on Artificial Intelligence ; 4(2):242-254, 2023.
Article in English | Scopus | ID: covidwho-2306664

ABSTRACT

Since the onset of the COVID-19 pandemic in 2019, many clinical prognostic scoring tools have been proposed or developed to aid clinicians in the disposition and severity assessment of pneumonia. However, limited work has focused on explanation techniques that are best suited to clinicians in their decision making. In this article, we present a new image explainability method named ensemble AI explainability (XAI), based on the SHAP and Grad-CAM++ methods. It provides a visual explanation for a deep learning prognostic model that predicts the mortality risk of community-acquired pneumonia and COVID-19 respiratory infected patients. In addition, we surveyed the existing literature and compiled prevailing quantitative and qualitative metrics to systematically review the efficacy of ensemble XAI and to compare it with several state-of-the-art explainability methods (LIME, SHAP, saliency map, Grad-CAM, Grad-CAM++). Our quantitative experimental results show that ensemble XAI has a comparable absence impact (decision impact: 0.72, confident impact: 0.24). Our qualitative experiment, in which a panel of three radiologists evaluated the degree of concordance and trust in the algorithms, showed that ensemble XAI has localization effectiveness (mean set accordance precision: 0.52, mean set accordance recall: 0.57, mean set F1: 0.50, mean set IOU: 0.36) and is the method most trusted by the panel of radiologists (mean vote: 70.2%). Finally, the deep learning interpretation dashboard used for the radiologist panel voting will be made available to the community. Our code is available at https://github.com/IHIS-HealthInsights/Interpretation-Methods-Voting-dashboard. © 2020 IEEE.
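A hedged sketch of the fusion idea behind an ensemble explanation map: normalise two attribution maps (random stand-ins for SHAP and Grad-CAM++ outputs) and average them. The equal weighting and the random maps are assumptions, not the paper's formulation.

```python
# Sketch: fuse two attribution maps (e.g. SHAP-like and Grad-CAM-like)
# into one ensemble heatmap. The equal weighting is an assumption.
import numpy as np

def normalize(att):
    att = np.maximum(att, 0.0)                 # keep positive evidence only
    return att / (att.max() + 1e-8)

def ensemble_map(map_a, map_b, w=0.5):
    return w * normalize(map_a) + (1 - w) * normalize(map_b)

# Random stand-ins for the two explanation maps
rng = np.random.default_rng(0)
shap_map, cam_map = rng.random((224, 224)), rng.random((224, 224))
fused = ensemble_map(shap_map, cam_map)
print(fused.shape, fused.min(), fused.max())
```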

6.
Big Data Mining and Analytics ; 6(3):381-389, 2023.
Article in English | Scopus | ID: covidwho-2301238

ABSTRACT

The rapid spread of Coronavirus Disease 2019 led to global lockdowns and disruptions in the academic sector. This study examined the impact of mobile technology on physics education during lockdowns. Data were collected through an online survey and evaluated using regression tools, frequency analysis, and analysis of variance (ANOVA). The findings revealed that the use of mobile technology had statistically significant effects on physics instructors' and students' academic work during the coronavirus lockdown. Most of the participants agreed that mobile technologies such as smartphones, laptops, PDAs, Zoom, and mobile apps were very useful and helpful for continuing education amid the pandemic restrictions, and that online teaching with smartphones and laptops on different platforms was effective during lockdown. The paper brings to the limelight the growing power of mobile technology solutions in physics education. © 2018 Tsinghua University Press.

7.
SN Comput Sci ; 4(4): 326, 2023.
Article in English | MEDLINE | ID: covidwho-2290682

ABSTRACT

COVID-19 has been a global pandemic. Flattening the curve requires intensive testing, and the world has faced a shortage of testing equipment and medical personnel with expertise. There is a need to automate and aid the detection process. Several diagnostic tools are currently used for COVID-19, including X-rays and CT scans. This study focuses on detecting COVID-19 from X-rays. We pursue two types of problems: binary classification (COVID-19 and no COVID-19) and multi-class classification (COVID-19, no COVID-19, and pneumonia). We examine and evaluate several classic models, namely VGG19, ResNet50, MobileNetV2, InceptionV3, Xception, DenseNet121, and specialized models such as DarkCOVIDNet and COVID-Net, and show that ResNet50 models perform best. We also propose a simple modification to the ResNet50 model, which gives a binary classification accuracy of 99.20% and a multi-class classification accuracy of 86.13%, cementing ResNet50's ability to detect COVID-19 and to differentiate it from pneumonia. The proposed model's predictions were interpreted via LIME, which provides contours, and Grad-CAM, which provides heat maps over the classifier's areas of interest, i.e., COVID-19-concentrated regions in the lungs; we found that LIME explained the results better. These explanations support our model's ability to generalize. The proposed model is intended to be deployed for free use.
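One plausible reading of "a simple modification to the ResNet50 model" is swapping the classification head, as in the Keras sketch below; the input size, layer sizes and frozen backbone are assumptions rather than the authors' exact architecture.

```python
# Sketch: ResNet50 backbone with a small custom head for CXR classification.
# Sizes and the frozen backbone are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(num_classes: int) -> tf.keras.Model:
    base = tf.keras.applications.ResNet50(
        weights="imagenet", include_top=False, input_shape=(224, 224, 3))
    base.trainable = False                      # start with a frozen backbone
    x = layers.GlobalAveragePooling2D()(base.output)
    x = layers.Dense(256, activation="relu")(x)
    x = layers.Dropout(0.3)(x)
    out = layers.Dense(num_classes, activation="softmax")(x)
    model = models.Model(base.input, out)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

binary_model = build_model(2)   # COVID-19 vs. no COVID-19
multi_model = build_model(3)    # COVID-19, pneumonia, normal
```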

8.
6th IEEE International Conference on Computational System and Information Technology for Sustainable Solutions, CSITSS 2022 ; 2022.
Article in English | Scopus | ID: covidwho-2274227

ABSTRACT

Artificial intelligence is becoming more advanced, with increasing complexity in generating predictions, and as a result it is becoming more challenging for users to understand and retrace how an algorithm arrives at its outcomes. Artificial intelligence also increasingly contributes to decision making; for example, with so many flower species in the world, botanists may need help identifying or recognizing which type of flower a specimen is. This paper presents an X-ray diagnostic model explained with the Local Interpretable Model-Agnostic Explanations (LIME) method. The model is trained with various COVID as well as non-COVID images. Chest X-rays are segmented to extract the lungs, and the model's predictions are tested with perturbed images generated using LIME. This paper opens a wide area of research in the field of XAI. © 2022 IEEE.
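The LIME perturbation workflow mentioned above, in generic form: the sketch below runs lime_image on a placeholder X-ray with a dummy predict_fn standing in for the trained classifier. Everything model-specific here is an assumption.

```python
# Sketch: perturbation-based LIME explanation for an image classifier.
# `predict_fn` and the X-ray array are placeholders for a real model/input.
import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries

def predict_fn(images):
    # Placeholder: a real model would return class probabilities here.
    batch = np.asarray(images, dtype=float)
    score = batch.mean(axis=(1, 2, 3)) / 255.0
    return np.stack([1 - score, score], axis=1)   # [p(non-COVID), p(COVID)]

xray = np.random.randint(0, 255, size=(224, 224, 3), dtype=np.uint8)

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    xray, predict_fn, top_labels=1, hide_color=0, num_samples=200)

label = explanation.top_labels[0]
img, mask = explanation.get_image_and_mask(
    label, positive_only=True, num_features=5, hide_rest=False)
overlay = mark_boundaries(img / 255.0, mask)      # superpixels that mattered
```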

9.
IEEE Access ; : 1-1, 2023.
Article in English | Scopus | ID: covidwho-2264984

ABSTRACT

Web Information Processing (WIP) has enormously impacted modern society, since a huge percentage of the population relies on the internet to acquire information. Social media platforms provide a channel for disseminating information and a breeding ground for spreading misinformation, creating confusion and fear among the population. One technique for detecting misinformation is machine learning-based models. However, due to the availability of multiple social media platforms, developing and training AI-based models has become a tedious job. Despite multiple efforts to develop machine learning-based methods for identifying misinformation, there has been very limited work on developing an explainable generalized detector capable of robust detection and of generating explanations beyond black-box outcomes. Knowing the reasoning behind the outcomes is essential to make the detector trustworthy, so employing explainable AI techniques is of utmost importance. In this work, the integration of two machine learning approaches, namely domain adaptation and explainable AI, is proposed to address these two issues of generalized detection and explainability. First, a Domain Adversarial Neural Network (DANN) is used to develop a generalized misinformation detector across multiple social media platforms; DANN generates classification results for test domains with relevant but unseen data. The DANN-based model, a traditional black-box model, cannot justify and explain its outcome, i.e., the labels for the target domain. Hence, the Local Interpretable Model-Agnostic Explanations (LIME) explainable AI method is applied to explain the outcome of the DANN model. To demonstrate these two approaches and their integration for effective explainable generalized detection, COVID-19 misinformation is considered as a case study. We experimented with two datasets and compared results with and without DANN implementation. Using DANN significantly improves the F1 score of classification and increases accuracy by 5% and AUC by 11%. The results show that the proposed framework performs well in the case of domain shift and can learn domain-invariant features while explaining the target labels with the LIME implementation. This can enable trustworthy information processing and extraction to combat misinformation effectively.
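The core DANN mechanism is a gradient reversal layer between the shared feature extractor and the domain classifier. The PyTorch sketch below shows that mechanism in isolation; the layer sizes and the text-feature dimension are assumptions, and this is not the authors' model.

```python
# Sketch: gradient reversal layer (the core DANN trick) in PyTorch.
# Feature sizes are illustrative assumptions.
import torch
from torch import nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) gradients flowing back to the feature extractor
        return -ctx.lambd * grad_output, None

class DANN(nn.Module):
    def __init__(self, in_dim=300, hidden=128, n_classes=2, n_domains=2):
        super().__init__()
        self.features = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.label_head = nn.Linear(hidden, n_classes)
        self.domain_head = nn.Linear(hidden, n_domains)

    def forward(self, x, lambd=1.0):
        h = self.features(x)
        y = self.label_head(h)                               # misinformation label
        d = self.domain_head(GradReverse.apply(h, lambd))    # platform/domain label
        return y, d

model = DANN()
labels, domains = model(torch.randn(8, 300))
```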

10.
5th IEEE International Conference on Advances in Science and Technology, ICAST 2022 ; : 133-136, 2022.
Article in English | Scopus | ID: covidwho-2264285

ABSTRACT

The emergence of the coronavirus COVID-19 switched the limelight onto digital health technologies. To keep infection rates from surging, numerous governments are looking into applications that could help disrupt infection chains beforehand. We created a self-assessment test based on COVID-19 symptoms that is capable of assessing a user's risk of COVID-19 using ML. The application also tracks the user and gives safety tips and recommendations. Using the track module, the user is notified of nearby containment zones. The contact tracing module helps the user maintain a specified distance from others. © 2022 IEEE.

11.
Int J Environ Res Public Health ; 20(5)2023 02 28.
Article in English | MEDLINE | ID: covidwho-2254578

ABSTRACT

In the last few years, much research has been conducted on the most harmful pandemic, COVID-19. Machine learning approaches have been applied to investigate chest X-rays of COVID-19 patients in many respects. This study focuses on the deep learning algorithm from the standpoint of feature space and similarity analysis. Firstly, we utilized Local Interpretable Model-agnostic Explanations (LIME) to justify the necessity of the region-of-interest (ROI) process, and we further prepared ROIs via U-Net segmentation that masked out non-lung areas of images to prevent the classifier from being distracted by irrelevant features. The experimental results were promising, with detection performance reaching an overall accuracy of 95.5%, a sensitivity of 98.4%, a precision of 94.7%, and an F1 score of 96.5% on the COVID-19 category. Secondly, we applied similarity analysis to identify outliers and further provided an objective confidence reference, specific to the similarity distance to the centers or boundaries of clusters, during inference. Finally, the experimental results suggested putting more effort into locally enhancing the low-accuracy subspace, which is identified by the similarity distance to the centers. Based on these perspectives, our approach can flexibly deploy dedicated classifiers for different subspaces instead of one rigid end-to-end black-box model for the whole feature space.
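The similarity-analysis idea, using the distance from a sample's features to cluster or class centers as a confidence reference, can be sketched as below; the random feature vectors and three classes are placeholders, not the paper's data.

```python
# Sketch: distance to class centroids in feature space as a rough
# confidence reference. Feature vectors here are random stand-ins.
import numpy as np

rng = np.random.default_rng(42)
train_feats = rng.normal(size=(200, 64))          # penultimate-layer features
train_labels = rng.integers(0, 3, size=200)       # 3 classes

centroids = np.stack(
    [train_feats[train_labels == c].mean(axis=0) for c in range(3)])

def similarity_reference(feat):
    dists = np.linalg.norm(centroids - feat, axis=1)
    nearest = int(dists.argmin())
    # Smaller distance to the nearest centroid -> higher confidence
    return nearest, float(dists[nearest])

cls, dist = similarity_reference(rng.normal(size=64))
print(f"nearest class {cls}, centroid distance {dist:.2f}")
```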


Subject(s)
COVID-19 , Datasets as Topic , Deep Learning , X-Rays , Humans , Algorithms , Mass Chest X-Ray
12.
Comput Biol Med ; 154: 106619, 2023 03.
Article in English | MEDLINE | ID: covidwho-2220589

ABSTRACT

AIM: COVID-19 has revealed the need for fast and reliable methods to assist clinicians in diagnosing the disease. This article presents a model that applies explainable artificial intelligence (XAI) methods based on machine learning techniques to COVID-19 metagenomic next-generation sequencing (mNGS) samples. METHODS: The data set used in the study contains 15,979 gene expressions of 234 patients, of whom 141 (60.3%) were COVID-19 negative and 93 (39.7%) COVID-19 positive. The least absolute shrinkage and selection operator (LASSO) method was applied to select genes associated with COVID-19. The Support Vector Machine - Synthetic Minority Oversampling Technique (SVM-SMOTE) method was used to handle the class imbalance problem. Logistic regression (LR), SVM, random forest (RF), and extreme gradient boosting (XGBoost) models were constructed to predict COVID-19. An explainable approach based on local interpretable model-agnostic explanations (LIME) and SHapley Additive exPlanations (SHAP) was applied to determine candidate COVID-19-associated biomarker genes and improve the final model's interpretability. RESULTS: For the diagnosis of COVID-19, the XGBoost model (accuracy: 0.930) outperformed the RF (accuracy: 0.912), SVM (accuracy: 0.877), and LR (accuracy: 0.912) models. According to SHAP, the three most important genes associated with COVID-19 were IFI27, LGR6, and FAM83A. The LIME results showed that a high level of IFI27 gene expression in particular contributed to increasing the probability of the positive class. CONCLUSIONS: The proposed model (XGBoost) was able to predict COVID-19 successfully. The results show that machine learning combined with LIME and SHAP can explain biomarker prediction for COVID-19 and provide clinicians with an intuitive understanding and interpretation of the impact of risk factors in the model.
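A compact sketch of the described pipeline on synthetic data: an L1-penalised selector as a stand-in for LASSO gene selection, SMOTE oversampling, XGBoost, then SHAP values. Sample counts mimic the abstract, but every other detail is an assumption, not the authors' code.

```python
# Sketch of the pipeline on synthetic data: L1-based feature selection,
# SMOTE oversampling, XGBoost, then SHAP values.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from imblearn.over_sampling import SMOTE
from xgboost import XGBClassifier
import shap

X, y = make_classification(n_samples=234, n_features=500, n_informative=20,
                           weights=[0.6, 0.4], random_state=0)

# L1-penalised selector as a stand-in for LASSO gene selection
selector = SelectFromModel(
    LogisticRegression(penalty="l1", solver="liblinear", C=0.1)).fit(X, y)
X_sel = selector.transform(X)

X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_sel, y)

model = XGBClassifier(n_estimators=200)
model.fit(X_bal, y_bal)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_sel)   # per-sample, per-gene contributions
print(np.asarray(shap_values).shape)
```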


Subject(s)
Artificial Intelligence , COVID-19 , Humans , COVID-19/diagnosis , COVID-19/genetics , Genetic Markers , Risk Factors , Neoplasm Proteins
13.
17th International Meeting on Computational Intelligence Methods for Bioinformatics and Biostatistics, CIBB 2021 ; 13483 LNBI:185-199, 2022.
Article in English | Scopus | ID: covidwho-2173777

ABSTRACT

The COVID-19 pandemic has had a significant impact on global health and has become a major international concern. Fortunately, early detection has helped decrease the number of deaths. Artificial Intelligence (AI) and Machine Learning (ML) techniques have ushered in a new era in which the main objective is no longer merely to assist experts in decision-making but to improve and extend their capabilities, and this is where interpretability comes in. This study aims to address one of the biggest hurdles AI faces today, namely public trust and acceptance, due to its black-box strategy. In this paper, we use a deep Convolutional Neural Network (CNN) on chest computed tomography (CT) image data and a Support Vector Machine (SVM) and Random Forest (RF) on clinical symptom data (bio-data) to diagnose patients positive for COVID-19. Our objective is to present Explainable AI (XAI) models that use the Local Interpretable Model-agnostic Explanations (LIME) technique to identify patients positive for the virus in an interpretable way. The results are promising and outperform the state of the art. The CNN model reached an accuracy and F1-score of 96% on CT-scan images, and the SVM outperformed the RF with an accuracy of 90% and a specificity of 91% on the bio-data. The interpretable results of the XAI-Img-Model and XAI-Bio-Model show that LIME explanations help in understanding how the SVM and CNN black-box models behave in making their decisions after being trained on different types of COVID-19 data. This can significantly increase trust and help experts understand and learn new patterns for the current pandemic. © 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG.
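For the bio-data branch, LIME tabular explanations of an SVM might look like the sketch below; the symptom feature names and synthetic data are hypothetical and only illustrate the mechanics.

```python
# Sketch: LIME tabular explanation for an SVM trained on clinical-symptom
# features. Feature names and data are hypothetical.
import numpy as np
from sklearn.svm import SVC
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(1)
feature_names = ["fever", "cough", "fatigue", "age", "spo2"]
X = rng.normal(size=(300, len(feature_names)))
y = (X[:, 0] + 0.5 * X[:, 1] - 0.3 * X[:, 4] > 0).astype(int)

svm = SVC(probability=True).fit(X, y)   # probability=True enables predict_proba

explainer = LimeTabularExplainer(
    X, feature_names=feature_names,
    class_names=["negative", "positive"], mode="classification")

exp = explainer.explain_instance(X[0], svm.predict_proba, num_features=5)
print(exp.as_list())                    # per-feature contributions for one case
```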

14.
Cancers (Basel) ; 15(1)2023 Jan 03.
Article in English | MEDLINE | ID: covidwho-2166268

ABSTRACT

Explainable artificial intelligence is a key component of artificially intelligent systems that aim to explain their classification results, and explaining classification results is essential for automatic disease diagnosis in healthcare. The human respiratory system is badly affected by various pulmonary diseases, and automatic classification and explanation can be used to detect these lung diseases. In this paper, we introduce a CNN-based transfer learning approach for automatically explaining pulmonary diseases, i.e., edema, tuberculosis, nodules, and pneumonia, from chest radiographs. Among these pulmonary diseases, pneumonia caused by COVID-19 is deadly; therefore, COVID-19 radiographs are used for the explanation task. We used the ResNet50 neural network, trained extensively on the COVID-CT dataset and the COVIDNet dataset. The interpretability method LIME is used to explain the classification results: LIME highlights the input image features that are important for generating the classification result. We evaluated the explanations against images highlighted by radiologists and found that our model highlights and explains the same regions. Our fine-tuned model achieved improved classification results, with accuracies of 93% and 97%, respectively. The analysis of our results indicates that this research not only improves classification results but also provides an explanation of pulmonary diseases with advanced deep-learning methods. This research could assist radiologists with automatic disease detection and explanations, which can be used to make clinical decisions and assist in diagnosing and treating pulmonary diseases at an early stage.
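The comparison against radiologist-highlighted regions can be quantified with a simple intersection-over-union, as in the sketch below; the two masks are synthetic placeholders rather than real LIME output or expert annotations.

```python
# Sketch: quantify agreement between an explanation mask and a radiologist
# annotation with intersection-over-union (IoU). Masks here are synthetic.
import numpy as np

def iou(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    union = np.logical_or(a, b).sum()
    return float(np.logical_and(a, b).sum() / union) if union else 0.0

lime_mask = np.zeros((224, 224), dtype=bool)
lime_mask[60:140, 50:120] = True                  # region the explainer highlighted
radiologist_mask = np.zeros((224, 224), dtype=bool)
radiologist_mask[70:150, 60:130] = True           # region the expert marked

print(f"IoU = {iou(lime_mask, radiologist_mask):.2f}")
```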

15.
Comput Biol Med ; 150: 106156, 2022 Oct 03.
Article in English | MEDLINE | ID: covidwho-2061033

ABSTRACT

Chest X-ray (CXR) images are considered useful for monitoring and investigating a variety of pulmonary disorders such as COVID-19, pneumonia, and tuberculosis (TB). With recent technological advancements, such diseases may now be recognized more precisely using computer-assisted diagnostics. In this study, a deep learning (DL) model that predicts four different categories is proposed, without compromising classification accuracy and with better feature extraction. The proposed model is validated with publicly available datasets of 7,132 chest X-ray (CXR) images. Furthermore, results are interpreted and explained using Gradient-weighted Class Activation Mapping (Grad-CAM), Local Interpretable Model-Agnostic Explanations (LIME), and SHapley Additive exPlanations (SHAP) for better understandability. Initially, convolution features are extracted to collect high-level object-based information. Next, Shapley values from SHAP, predictability results from LIME, and heatmaps from Grad-CAM are used to explore the black-box approach of the DL model, achieving an average test accuracy of 94.31 ± 1.01% and a validation accuracy of 94.54 ± 1.33% for 10-fold cross-validation. Finally, in order to validate the model and qualify medical risk, medical sensations of classification are taken into account to consolidate the explanations generated by the eXplainable Artificial Intelligence (XAI) framework. The results suggest that XAI and DL models give clinicians/medical professionals persuasive and coherent conclusions related to the detection and categorization of COVID-19, pneumonia, and TB.
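A generic Grad-CAM sketch (not the authors' implementation) for a Keras CNN: gradients of the class score with respect to the last convolutional feature map are pooled into channel weights and combined into a heatmap. The ResNet50 backbone and layer name are assumptions used only to make the example self-contained.

```python
# Sketch: Grad-CAM heatmap for a Keras CNN. The backbone and the name of
# its last convolutional layer ("conv5_block3_out") are assumptions.
import numpy as np
import tensorflow as tf

model = tf.keras.applications.ResNet50(weights="imagenet")
grad_model = tf.keras.Model(
    model.input,
    [model.get_layer("conv5_block3_out").output, model.output])

def grad_cam(image_batch, class_index):
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image_batch)
        score = preds[:, class_index]
    grads = tape.gradient(score, conv_out)              # d(score)/d(feature map)
    weights = tf.reduce_mean(grads, axis=(1, 2))        # global-average-pool grads
    cam = tf.einsum("bhwc,bc->bhw", conv_out, weights)
    cam = tf.nn.relu(cam)
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()  # normalised heatmap

img = np.random.rand(1, 224, 224, 3).astype("float32")
heatmap = grad_cam(img, class_index=0)                  # shape (1, 7, 7)
```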

16.
3rd Workshop on Gender Equality, Diversity, and Inclusion in Software Engineering, GEICSE 2022 ; : 59-66, 2022.
Article in English | Scopus | ID: covidwho-2053351

ABSTRACT

Women are underrepresented in Science, Technology, Engineering and Mathematics (STEM). Many initiatives have been implemented in efforts to change this imbalance, including in primary, secondary and third-level institutions. Some are supported by governments, for example by Science Foundation Ireland in Ireland, by professional bodies such as IEEE, and by companies. Initiatives are targeted at STEM in general and at subsets of the discipline, and there are many STEM intervention programmes worldwide from which we in software engineering can learn. The logistics around planning and implementing a STEM intervention programme are considerable, and this is compounded when a programme must quickly pivot and change how it is provided due to external factors. While this paper presents our experience with one STEM intervention, the University of Limerick-Lero/Johnson & Johnson WiSTEM2D (Women in STEM, Manufacturing and Design) programme, it also discusses and describes the challenges and opportunities that became apparent when it had to completely change how it was deployed and implemented due to the COVID-19 pandemic. © 2022 ACM.

17.
3rd Workshop on Intelligent Data - From Data to Knowledge, DOING 2022, 1st Workshop on Knowledge Graphs Analysis on a Large Scale, K-GALS 2022, 4th Workshop on Modern Approaches in Data Engineering and Information System Design, MADEISD 2022, 2nd Workshop on Advanced Data Systems Management, Engineering, and Analytics, MegaData 2022, 2nd Workshop on Semantic Web and Ontology Design for Cultural Heritage, SWODCH 2022 and Doctoral Consortium which accompanied 26th European Conference on Advances in Databases and Information Systems, ADBIS 2022 ; 1652 CCIS:14-23, 2022.
Article in English | Scopus | ID: covidwho-2048129

ABSTRACT

During the COVID-19 pandemic, the misinformation problem arose again through social networks, as an epidemic of harmful health advice and false solutions. In Brazil, one of the primary sources of misinformation is the messaging application WhatsApp. Thus, automatic misinformation detection (MID) for COVID-19 content in Brazilian Portuguese WhatsApp messages has become a crucial challenge. Recently, some works have presented different MID approaches for this purpose. Despite this success, most of the explored MID models remain complex black boxes, so their internal logic and inner workings are hidden from users, who cannot fully understand why a MID model assessed a particular WhatsApp message as misinformation or not. In this article, we explore a post-hoc interpretability method called LIME to explain the predictions of MID approaches. In addition, we apply a textual analysis tool called LIWC to analyze the linguistic characteristics of WhatsApp messages and identify psychological aspects present in misinformation and non-misinformation messages. The results indicate that it is feasible to understand relevant aspects of the MID model's predictions and to find patterns in WhatsApp messages about COVID-19. We hope that these findings help in understanding the misinformation phenomenon around COVID-19 in WhatsApp messages. © 2022, Springer Nature Switzerland AG.
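LIWC itself is a proprietary dictionary tool, so the toy sketch below only mimics the idea: count words from hand-picked psychological categories in each message. The Portuguese word lists are illustrative assumptions, not LIWC's dictionaries.

```python
# Toy sketch of LIWC-style analysis: count words from hand-picked
# psychological categories in each message. Word lists are illustrative.
from collections import Counter

categories = {
    "negative_emotion": {"medo", "perigo", "morte"},      # fear, danger, death
    "certainty": {"sempre", "nunca", "comprovado"},        # always, never, proven
}

def category_counts(message: str) -> Counter:
    tokens = message.lower().split()
    counts = Counter()
    for name, words in categories.items():
        counts[name] = sum(tok in words for tok in tokens)
    return counts

msg = "tratamento comprovado contra o virus sempre funciona"
print(category_counts(msg))   # Counter({'certainty': 2, 'negative_emotion': 0})
```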

18.
Citrus Fruit (Second Edition) ; : 1-21, 2023.
Article in English | ScienceDirect | ID: covidwho-2003776

ABSTRACT

Citrus fruits are grown in more than 140 countries around the world. Total citrus production is nearing the 145-million-ton mark (143.75 million tons in 2019-20, up from 100 million tons in 2007-08). Including sweet oranges, mandarins, lemons, limes, grapefruit, pummelo, their hybrids, numerous locally grown species, tart fruits and citrons, citrus is the largest fruit industry in the world. The second edition of "Citrus Fruit—Biology, Technology and Evaluation" contains 533 additional references on top of the previous 1,400, making it a rich and comprehensive single source of information on the subject. Updated production and area data and statistics, including an overview of the latest citrus postharvest management, are covered in the introductory chapter. This edition adds two new chapters: "Alternative strategies for postharvest disease management" and "Impact of climate change and covid-19 pandemic on citrus industry".

19.
7th IEEE International conference for Convergence in Technology, I2CT 2022 ; 2022.
Article in English | Scopus | ID: covidwho-1992603

ABSTRACT

This work proposes a unified approach to increasing the explainability of the predictions made by Convolutional Neural Networks (CNNs) on medical images using currently available Explainable Artificial Intelligence (XAI) techniques. The method, named LISA, incorporates multiple techniques, including Local Interpretable Model-Agnostic Explanations (LIME), Integrated Gradients, Anchors, and SHapley Additive exPlanations (SHAP), a Shapley-values-based approach, to provide explanations for the predictions of black-box models. This unified method increases confidence in a black-box model's decisions, allowing it to be employed in crucial applications under the supervision of human specialists. In this work, a chest X-ray (CXR) classification model for identifying COVID-19 patients is trained using transfer learning to illustrate the applicability of the XAI techniques and the unified method (LISA) in explaining model predictions. To derive predictions, an ImageNet-based Inception V2 model is utilized as the transfer learning model. © 2022 IEEE.
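Of the techniques listed, integrated gradients is easy to show compactly; the sketch below computes it for a stock Keras classifier. The black baseline, 32 interpolation steps and MobileNetV2 backbone are arbitrary choices, not the paper's setup.

```python
# Sketch: integrated gradients for a Keras image classifier. The all-black
# baseline and 32 interpolation steps are common but arbitrary choices.
import tensorflow as tf

model = tf.keras.applications.MobileNetV2(weights="imagenet")

def integrated_gradients(image, class_index, steps=32):
    baseline = tf.zeros_like(image)                        # all-black baseline
    alphas = tf.linspace(0.0, 1.0, steps)[:, None, None, None]
    interpolated = baseline + alphas * (image - baseline)  # (steps, H, W, 3)
    with tf.GradientTape() as tape:
        tape.watch(interpolated)
        preds = model(interpolated)
        scores = preds[:, class_index]
    grads = tape.gradient(scores, interpolated)
    avg_grads = tf.reduce_mean(grads, axis=0)              # average over the path
    return ((image - baseline) * avg_grads).numpy()        # per-pixel attributions

img = tf.random.uniform((224, 224, 3))                     # placeholder CXR image
attributions = integrated_gradients(img, class_index=0)
```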

20.
Foods ; 11(10)2022 May 21.
Article in English | MEDLINE | ID: covidwho-1953157

ABSTRACT

During the COVID-19 crisis, customers' preference for having food delivered to their doorstep instead of waiting in a restaurant propelled the growth of food delivery services (FDSs). With restaurants going online and bringing FDSs onboard, such as UberEATS, Menulog or Deliveroo, customer reviews on online platforms have become an important source of information about a company's performance. FDS organisations aim to gather complaints from customer feedback and effectively use the data to determine areas for improvement and enhance customer satisfaction. This work reviews machine learning (ML) and deep learning (DL) models and explainable artificial intelligence (XAI) methods for predicting customer sentiment in the FDS domain. A literature review revealed the wide usage of lexicon-based and ML techniques for predicting sentiment from customer reviews in FDSs. However, few studies applying DL techniques were found, due to the lack of interpretability of the models and explainability of the decisions made. The key finding of this systematic review is that 77% of the models are non-interpretable in nature, raising questions for organisations about the explainability of, and trust in, the system. DL models in other domains perform well in terms of accuracy but lack explainability, which can be achieved with XAI implementation. Future research should focus on implementing DL models for sentiment analysis in the FDS domain and incorporating XAI techniques to bring out the explainability of the models.
